psycopg2.OperationalError: server closed the connection unexpectedly #829
Comments
Also, with psycopg2 on Python 2 everything is OK.
Try keeping the connection alive instead.
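One common way to keep a connection alive is to pass TCP keepalive parameters to `connect()`. This is a sketch, not the commenter's actual code; the parameter names are the libpq keepalive options that psycopg2 forwards to the server connection, and the numeric values are illustrative only, not recommendations:

```python
# TCP keepalive settings can be passed to psycopg2.connect() as extra
# connection parameters (they map to the libpq keepalives* options).
# The numeric values below are illustrative, not recommendations.
keepalive_kwargs = {
    "keepalives": 1,            # enable TCP keepalives
    "keepalives_idle": 30,      # idle seconds before the first probe
    "keepalives_interval": 10,  # seconds between probes
    "keepalives_count": 5,      # failed probes before dropping the link
}

# conn = psycopg2.connect(dsn, **keepalive_kwargs)  # `dsn` assumed defined
```

With these settings the OS periodically probes an otherwise idle connection, so NAT gateways and firewalls are less likely to drop it silently.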
The result is the same.
I'm having the same issue. For now, as a workaround, I had to create a wrapper around the connection object, hooking the methods in use; in the `cursor()` method I had to add a check.
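As an illustration of that kind of workaround, here is a hypothetical sketch (the class name `ReconnectingConnection` is invented, not the commenter's code): the wrapper re-opens the connection inside `cursor()` whenever it finds the underlying connection closed:

```python
class ReconnectingConnection:
    """Hypothetical wrapper: re-opens the connection inside cursor()
    when the underlying connection has been closed."""

    def __init__(self, connect_func):
        # connect_func would be e.g. `lambda: psycopg2.connect(dsn)`
        self._connect = connect_func
        self._conn = self._connect()

    def cursor(self, *args, **kwargs):
        # psycopg2 connections expose `closed` (0 while the connection
        # is open); re-connect before handing out a cursor otherwise.
        if getattr(self._conn, "closed", 1):
            self._conn = self._connect()
        return self._conn.cursor(*args, **kwargs)
```

This only papers over the disconnection; any transaction in flight when the server drops the link is still lost.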
I've played a bit with this issue, which is reproducible, using the following script:

```python
import time
import threading
import multiprocessing
from pprint import pprint

from psycopg2 import connect

sql = 'select pg_backend_pid()'


class DAL():
    def __init__(self, uri):
        self.conn = connect(uri)
        self.cursor = self.conn.cursor()

    def executesql(self, sql):
        self.cursor.execute(sql)
        return self.cursor.fetchall()


def thread():
    db = DAL('')
    i = 3
    while i > 0:
        pprint(dict(
            function='thread:run', executesql='%s' % db.executesql(sql)[0][0]))
        time.sleep(1)
        i -= 1


def process():
    pprint(dict(function='process:begin'))
    time.sleep(2)
    pprint(dict(function='process:end'))


def main():
    pprint(dict(function='main:begin'))
    t = threading.Thread(target=thread, name='mythread')
    t.start()
    time.sleep(1)
    multiprocessing.Process(target=process, name='myprocess').start()
    t.join()
    pprint(dict(function='main:end'))


if __name__ == '__main__':
    main()
```

The script has the following output on Python 2.7 and 3.6:
Saving the output of strace in two different files:
After making the pid numbers uniform in the files (P0, P1, P2) and extracting the relevant part (omitting what comes before the connection to the db): Python 2 strace, Python 3 strace.
I see here that on Python 3 the process P2, politely, closes the connection belonging to P1. Similar behaviour can be seen in a run with debug enabled: on Python 3 the trace would be something like:
So, on Python 3 it seems the objects undergo garbage collection, which for the connection sends a close command on an open file descriptor. Maybe this can be fixed by making sure that deleting the object doesn't close a file descriptor belonging to another process.
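The fix described above can be illustrated with a minimal sketch (the `ForkSafeConnection` wrapper is invented for illustration, not psycopg2's actual patch): record the PID of the creating process and refuse to close the socket from any other process, so a forked child's garbage collector cannot tear down the parent's session:

```python
import os


class ForkSafeConnection:
    """Hypothetical sketch: only the process that created the
    connection is allowed to close it."""

    def __init__(self, conn):
        self._conn = conn
        self._pid = os.getpid()  # remember the creating process

    def close(self):
        # A forked child inherits the file descriptor; sending the
        # protocol-level terminate message from the child would kill
        # the parent's connection too, so refuse unless we are the
        # process that opened it.
        if os.getpid() == self._pid:
            self._conn.close()

    def __del__(self):
        self.close()
```

With this guard in place, garbage collection in a forked process becomes a no-op for connections it did not open.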
Unrelated processes close the FD of the connection. This happens in Python 3.6 but not in 2.7. Let's see if Travis shows where else it fails...
Still hitting this issue at 2.8.3, with heavy multiprocessing use. Any particular info you'd like? Any hints? Will temporarily disabling GC help?
@leongold This bug is fixed. You are not hitting this bug, maybe another one. Please provide a test to reproduce your problem, thank you.
I'm getting this as well.
@yshahak This might even mean that your server closed the connection. Did you check in the server logs whether that's what happened? This bug is about an issue whereby, on Python 3, processes closed each other's connections. If you produce an example demonstrating it's the client's fault, then we can do something. Failing that, I assume it's exactly what the message says: the server closed your connection and you can't do much about it client side.
@dvarrazzo I'm not entirely sure whether the test case fits exactly. I'll try to find the time to write my own test case; in the meantime:
I'm not sure what causes this, but what's apparent to me is that leaving a connection idle for too long causes it to silently time out/disconnect. In your reply to @yshahak you mentioned it might indeed be the server closing the client's connection; I don't believe that is the case. I believe the connection is already closed by that time.
@leongold Remember that you cannot share connections across processes. If you open a connection, fork, and use it from the forked process, it won't work.
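A minimal sketch of the safe pattern: each worker opens its own connection inside the child process, after the fork. The `psycopg2.connect` call is left commented out since no server is assumed here; the worker just reports its PID to show it runs in a separate process:

```python
import multiprocessing as mp
import os


def worker(dsn, q):
    # The connection must be created *inside* the child process,
    # after the fork; never reuse a connection object inherited
    # from the parent.
    # conn = psycopg2.connect(dsn)   # real code would connect here
    q.put(os.getpid())


if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker, args=("dbname=test", q))
    p.start()
    p.join()
    print("child pid:", q.get())
```

Each child then gets its own server backend, and closing a connection in one process cannot affect any other.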
@dvarrazzo Each subprocess instantiates its own connection, which also leads me to believe subprocessing isn't relevant here. Could it be there's simply an issue of idle connections silently disconnecting? Edit: the server logs have a bunch of:
Edit 2: another possibly important tidbit: my architecture was working flawlessly for months; this started to occur when the time between connection instantiation and the first INSERT grew by a couple of minutes. And, as aforementioned, simply creating a new connection before INSERTing works around it.
I'm getting `sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly`. Can someone help me out with this?
Maybe I'm facing this issue (processes closing each other's connections on Python 3) while uploading bulk data using the multiprocessing library. I'm on psycopg2 version 2.8. Does anyone know the solution here?
I encountered this while using my SQLAlchemy connector wrappers; the high-level cause was too much data. Since one typically doesn't have access to SQLAlchemy internals, what @dimandzhi recommended is the practical workaround.
Or it might just be down to your server's configuration.
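For reference, these are PostgreSQL server-side settings that commonly produce this symptom. The values are illustrative only, and which (if any) setting the comment above refers to is unknown:

```
# postgresql.conf -- illustrative values only
tcp_keepalives_idle = 300                       # seconds before the server probes an idle client
idle_in_transaction_session_timeout = '10min'   # kill sessions idle inside an open transaction
statement_timeout = 0                           # 0 disables the per-statement timeout
```

When one of these timeouts fires, the server closes the socket and the client sees exactly the "server closed the connection unexpectedly" error on its next use of the connection.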
This code raises the exception above, but with pg8000 everything is OK.